89 research outputs found

    Informed Network Coding for Minimum Decoding Delay

    Network coding is a highly efficient data dissemination mechanism for wireless networks. Since network-coded information can only be recovered after a sufficient number of coded packets has been delivered, the resulting decoding delay can become problematic for delay-sensitive applications such as real-time media streaming. Motivated by this observation, we consider several algorithms that minimize the decoding delay and analyze their performance by means of simulation. The algorithms differ both in the required information about the state of the neighbors' buffers and in the way this knowledge is used to decide which packets to combine through coding operations. Our results show that a greedy algorithm, whose encodings maximize the number of nodes at which a coded packet is immediately decodable, significantly outperforms existing network coding protocols. Comment: Proc. of the IEEE International Conference on Mobile Ad-hoc and Sensor Systems (IEEE MASS 2008), Atlanta, USA, September 2008
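    The greedy rule described above can be sketched as follows. This is a hypothetical illustration, not the authors' exact algorithm: given each neighbor's buffer (the set of packets it already holds), it exhaustively searches for the XOR combination that the largest number of neighbors can decode immediately, where a neighbor can instantly decode an XOR of a set S iff it already holds all but exactly one packet in S.

```python
from itertools import combinations

def greedy_encoding(packets, neighbor_buffers):
    """Pick the subset of packets to XOR together that maximizes the
    number of neighbors able to decode the result immediately.
    `packets`: iterable of packet ids; `neighbor_buffers`: list of sets
    of packet ids each neighbor already holds. (Illustrative sketch;
    exhaustive search is exponential in the number of packets.)"""
    best, best_score = None, -1
    for r in range(1, len(packets) + 1):
        for subset in combinations(packets, r):
            s = set(subset)
            # A neighbor decodes instantly iff it misses exactly one
            # packet of the combination.
            score = sum(1 for buf in neighbor_buffers if len(s - buf) == 1)
            if score > best_score:
                best, best_score = s, score
    return best, best_score
```

    Note that an uncoded packet (a singleton set) can win: if every neighbor is missing the same packet, sending it plain is instantly useful to all of them.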

    Effective Delay Control in Online Network Coding

    Motivated by streaming applications with stringent delay constraints, we consider the design of online network coding algorithms with timely delivery guarantees. Assuming that the sender is providing the same data to multiple receivers over independent packet erasure channels, we focus on the case of perfect feedback and heterogeneous erasure probabilities. Based on a general analytical framework for evaluating the decoding delay, we show that existing ARQ schemes fail to ensure that receivers with weak channels are able to recover from packet losses within reasonable time. To overcome this problem, we redefine the encoding rules in order to break the chains of linear combinations that cannot be decoded after one of the packets is lost. Our results show that sending uncoded packets at key times ensures that all the receivers are able to meet specific delay requirements with very high probability. Comment: 9 pages, IEEE Infocom 200
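    The chain-breaking idea can be sketched as a simple sender-side decision rule. This is a hypothetical illustration consistent with the abstract, not the paper's actual encoding rules: when any receiver has gone too long without decoding, the sender falls back to an uncoded transmission of that receiver's oldest missing packet.

```python
def choose_transmission(receivers, threshold):
    """Decide what the sender transmits next. `receivers` is a list of
    dicts with keys "slots_since_decode" (int) and "missing" (set of
    packet ids). If any receiver has been stuck beyond `threshold`
    slots, send its oldest missing packet uncoded to break the chain of
    undecodable linear combinations; otherwise send a coded combination.
    (Illustrative sketch only.)"""
    for r in receivers:
        if r["slots_since_decode"] > threshold:
            return ("uncoded", min(r["missing"]))
    return ("coded", None)
```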

    Optimal Joint Routing and Scheduling in Millimeter-Wave Cellular Networks

    Millimeter-wave (mmWave) communication is a promising technology to cope with the expected exponential increase in data traffic in 5G networks. mmWave networks typically require a very dense deployment of mmWave base stations (mmBS). To reduce cost and increase flexibility, wireless backhauling is needed to connect the mmBSs. The characteristics of mmWave communication, and specifically its high directionality, imply new requirements for efficient routing and scheduling paradigms. We propose an efficient scheduling method, so-called schedule-oriented optimization, based on matching theory that optimizes QoS metrics jointly with routing. It is capable of solving any scheduling problem that can be formulated as a linear program whose variables are link times and QoS metrics. As an example of the schedule-oriented optimization, we show the optimal solution of the maximum throughput fair scheduling (MTFS). Practically, the optimal scheduling can be obtained even for networks with over 200 mmBSs. To further increase the runtime performance, we propose an efficient edge-coloring based approximation algorithm with provable performance bound. It achieves over 80% of the optimal max-min throughput and runs 5 to 100 times faster than the optimal algorithm in practice. Finally, we extend the optimal and approximation algorithms for the cases of multi-RF-chain mmBSs and integrated backhaul and access networks. Comment: To appear in Proceedings of INFOCOM '1
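    The connection between edge coloring and scheduling can be illustrated with a toy greedy colorer. This sketch is not the paper's approximation algorithm: it only shows the underlying idea that links sharing an mmBS (a single-RF-chain node cannot serve two links at once) must land in different colors, i.e. different time slots.

```python
def greedy_edge_coloring(edges):
    """Assign each backhaul link (u, v) a color (time slot) so that no
    two links sharing an endpoint get the same slot. Greedy first-fit
    over links in input order; illustrative only, with no approximation
    guarantee on the number of slots used."""
    color_of = {}
    for u, v in edges:
        # Colors already taken by links incident to u or v.
        used = {c for e, c in color_of.items() if u in e or v in e}
        c = 0
        while c in used:
            c += 1
        color_of[(u, v)] = c
    return color_of
```

    Links with the same color are conflict-free and can be activated in the same time slot, which is what turns a coloring into a schedule.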

    Measuring the Impact of Adversarial Errors on Packet Scheduling Strategies

    In this paper we explore the problem of achieving efficient packet transmission over unreliable links with worst-case occurrence of errors. In such a setup, even an omniscient offline scheduling strategy cannot achieve stability of the packet queue, nor is it able to use up all the available bandwidth. Hence, an important first step is to identify an appropriate metric for measuring the efficiency of scheduling strategies in such a setting. To this end, we propose a relative throughput metric which corresponds to the long-term competitive ratio of the algorithm with respect to the optimal. We then explore the impact of the error detection mechanism and feedback delay on our measure. We compare instantaneous error feedback with deferred error feedback, which requires a faulty packet to be fully received in order to detect the error. We propose algorithms for worst-case adversarial and stochastic packet arrival models, and formally analyze their performance. The relative throughput achieved by these algorithms is shown to be close to optimal by deriving lower bounds on the relative throughput of the algorithms and almost matching upper bounds for any algorithm in the considered settings. Our collection of results demonstrates the potential of using instantaneous feedback to improve the performance of communication systems in adverse environments.
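    An empirical version of the relative throughput metric can be computed directly from delivery traces. This is a simplified sketch: the formal metric is a long-run limit of the competitive ratio against an omniscient offline optimum, whereas here we just take the ratio over a finite trace run against the same error pattern.

```python
def relative_throughput(alg_deliveries, opt_deliveries):
    """Empirical relative throughput: total packets delivered by the
    online algorithm divided by those delivered by the offline optimum
    over the same (adversarial) error pattern. Both arguments are
    per-slot delivery counts. Returns 1.0 when the optimum delivers
    nothing, since no algorithm can do worse relative to it then."""
    total_alg = sum(alg_deliveries)
    total_opt = sum(opt_deliveries)
    return total_alg / total_opt if total_opt else 1.0
```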

    A Machine Learning-based Framework for Optimizing the Operation of Future Networks

    5G and beyond networks are not only sophisticated and difficult to manage, but must also satisfy a wide range of stringent performance requirements and adapt quickly to changes in traffic and network state. Advances in machine learning and parallel computing underpin new powerful tools that have the potential to tackle these complex challenges. In this article, we develop a general machine learning-based framework that leverages artificial intelligence to forecast future traffic demands and characterize traffic features. This makes it possible to exploit such traffic insights to improve the performance of critical network control mechanisms, such as load balancing, routing, and scheduling. In contrast to prior works that design problem-specific machine learning algorithms, our generic approach can be applied to different network functions, allowing reuse of existing control mechanisms with minimal modifications. We explain how our framework can orchestrate ML to improve two different network mechanisms. Further, we undertake validation by implementing one of these, mobile backhaul routing, using data collected by a major European operator and demonstrating a 3× reduction of the packet delay compared to traditional approaches. This work is partially supported by the Madrid Regional Government through the TAPIR-CM program (S2018/TCS-4496) and the Juan de la Cierva grant (FJCI-2017-32309). Paul Patras acknowledges the support received from the Cisco University Research Program Fund (2019-197006).
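    The forecast-then-control pattern the framework relies on can be sketched with a toy predictor. This is a stand-in for illustration only: the article's framework uses learned models trained on operator data, not a moving average.

```python
def forecast_next(demand_history, window=3):
    """Toy moving-average traffic forecaster: predict the next demand
    sample as the mean of the last `window` observations. A control
    mechanism (e.g. routing) would consume this forecast instead of the
    last measured value. (Illustrative stand-in for a learned model.)"""
    recent = demand_history[-window:]
    return sum(recent) / len(recent)
```

    The point of the generic design is that any control mechanism that already accepts a demand estimate can be fed such a forecast without modification.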

    pDCell: an End-to-End Transport Protocol for Mobile Edge Computing Architectures

    Pending publication 2019. To deal with increasingly demanding services and the rapid growth in number of devices and traffic, 5G and beyond mobile networks need to provide extreme capacity and peak data rates at very low latencies. Consequently, applications and services need to move closer to the users into so-called edge data centers. At the same time, there is a trend to virtualize core and radio access network functionalities and bring them to edge data centers as well. However, as is known from conventional data centers, legacy transport protocols such as TCP are vastly suboptimal in such a setting. In this work, we present pDCell, a transport design for mobile edge computing architectures that extends data center transport approaches to the mobile network domain. Specifically, pDCell ensures that data traffic from application servers arrives at virtual radio functions (i.e., C-RAN Central Units) in a timely manner to (i) minimize queuing delays and (ii) maximize cellular network utilization. We show that pDCell significantly improves flow completion times compared to conventional transport protocols like TCP and data center transport solutions, and is thus an essential component for future mobile networks. This work is partially supported by the European Research Council grant ERC CoG 617721, the Ramon y Cajal grant from the Spanish Ministry of Economy and Competitiveness RYC-2012-10788, by the European Union H2020-ICT grant 644399 (MONROE), by the H2020 collaborative Europe/Taiwan research project 5G-CORAL (grant num. 761586) and the Madrid Regional Government through the TIGRE5-CM program (S2013/ICE-2919). Further, the work of Dr. Kogan is partially supported by a grant from the Cisco University Research Program Fund, an advised fund of Silicon Valley Community Foundation.
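    The goal of keeping Central Unit queues small while saturating the radio can be illustrated with a simple pacing rule. This is a hypothetical sketch in the spirit of the abstract, not pDCell's actual mechanism: send at roughly the radio-facing capacity, corrected so the queue drains (or builds) toward a small target over one RTT.

```python
def paced_send_rate(radio_capacity_mbps, queue_bytes, target_queue_bytes, rtt_s):
    """Hypothetical pacing rule: match radio capacity, plus a correction
    term that moves the Central Unit queue toward `target_queue_bytes`
    within one RTT. All names are illustrative assumptions, not pDCell
    parameters. Returns a non-negative rate in Mbit/s."""
    # Bytes of backlog to remove (negative => headroom to add), as Mbit/s.
    correction_mbps = (target_queue_bytes - queue_bytes) * 8 / rtt_s / 1e6
    return max(0.0, radio_capacity_mbps + correction_mbps)
```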

    Behind the NAT – A Measurement Based Evaluation of Cellular Service Quality

    Mobile applications such as VoIP, (live) gaming, or video streaming have diverse QoS requirements ranging from low delay to high throughput. The optimization of the network quality experienced by end-users requires detailed knowledge of the expected network performance. Also, the achieved service quality is affected by a number of factors, including network operator and available technologies. However, most studies focusing on measuring the cellular network do not consider the performance implications of network configuration and management. To this end, this paper reports on an extensive data set of cellular network measurements, focused on analyzing root causes of mobile network performance variability. Measurements conducted over four weeks in a 4G cellular network in Germany show that management and configuration decisions have a substantial impact on the performance. Specifically, it is observed that the association of mobile devices to a Point of Presence (PoP) within the operator's network can influence the end-to-end RTT to a large extent. Given the collected data, a model predicting the PoP assignment and its resulting RTT is developed, leveraging Markov chain and machine learning approaches. RTT increases of 58% to 73% compared to the optimum performance are observed in more than 57% of the measurements.
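    The Markov chain component of such a prediction model can be sketched from an observed sequence of PoP assignments. This is a generic first-order Markov estimator for illustration, not the paper's model: transition probabilities are simply the normalized counts of observed PoP-to-PoP transitions.

```python
def estimate_transitions(pop_sequence):
    """Estimate a first-order Markov transition matrix from a sequence
    of observed PoP assignments. Returns a dict mapping each PoP to a
    dict of next-PoP probabilities. (Illustrative estimator; a real
    predictor would combine this with further device/network features.)"""
    counts = {}
    for a, b in zip(pop_sequence, pop_sequence[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    return {a: {b: n / sum(nbrs.values()) for b, n in nbrs.items()}
            for a, nbrs in counts.items()}
```

    Given the transition matrix, the most likely next PoP (and hence its expected RTT) follows from the row of the currently assigned PoP.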